Results 1 - 20 of 30
1.
International Journal of Image, Graphics and Signal Processing ; 13(5):1, 2022.
Article in English | ProQuest Central | ID: covidwho-2305937

ABSTRACT

The coronavirus pandemic has been ongoing since 2019 and has yet to abate, so classifying medical CT scans to assist diagnosis is particularly important. Supervised deep learning algorithms have achieved great success in medical CT classification, but medical image datasets often require expert annotation, and many research datasets are not publicly available. To address this problem, this paper draws on the self-supervised learning algorithm MAE and uses an MAE model pre-trained on ImageNet to perform transfer learning on CT scan datasets. Through extensive experiments on the COVID-CT and SARS-CoV-2 datasets, we compare this SSL-based method with other state-of-the-art supervised pretraining methods. Experimental results show that our method improves the generalization performance of the model more effectively and avoids the risk of overfitting on small datasets; the model achieved almost the same accuracy as supervised learning on both test datasets. Finally, ablation experiments demonstrate the effectiveness of our method and how it works.
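
Below is a minimal sketch of the transfer-learning step described in this abstract, assuming an ImageNet-pretrained ViT backbone from the `timm` library stands in for the MAE-pretrained encoder; the model name, two-class head, and optimizer settings are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained ViT encoder on a small
# two-class CT dataset (COVID vs. non-COVID). In the paper the weights would
# come from MAE pretraining; here a generic pretrained ViT stands in.
import torch
import torch.nn as nn
import timm
from torch.utils.data import DataLoader

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = nn.CrossEntropyLoss()

def finetune(loader: DataLoader, epochs: int = 10) -> None:
    model.train()
    for _ in range(epochs):
        for images, labels in loader:      # images: (B, 3, 224, 224) CT slices
            logits = model(images)         # (B, 2) class scores
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```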

2.
Expert Systems with Applications ; 225, 2023.
Article in English | Scopus | ID: covidwho-2305858

ABSTRACT

Recently, the large-scale influence of COVID-19 has promoted the fast development of intelligent tutoring systems (ITS). As a major task of ITS, Knowledge Tracing (KT) aims to capture a student's dynamic knowledge state from their historical response sequences and provide personalized learning assistance. However, most existing KT methods encounter the data sparsity problem: in real scenarios, an online tutoring system usually has an extensive collection of questions while each student can only interact with a limited number of them, so the records of some questions can be extremely sparse, which degrades the performance of traditional KT models. To resolve this issue, we propose a Dual-channel Heterogeneous Graph Network (DHGN) that learns informative question representations from students' records by capturing both high-order heterogeneous and local relations. Because the supervised learning manner applied in previous methods cannot exploit unobserved relations between questions, we integrate a self-supervised framework into the KT task and employ contrastive learning via the two channels of DHGN as an auxiliary task to improve KT performance. Moreover, we adopt the attention mechanism, which has achieved impressive performance in natural language processing tasks, to capture students' knowledge state. However, the standard attention network is ill-suited to the KT task because a student's current knowledge state depends strongly on recently attempted questions, unlike language processing tasks, which focus more on long-term dependencies. To avoid this inefficiency, we further devise a novel Hybrid Attentive Network (HAN), which produces both global attention and hierarchical local attention to model long-term and short-term intents, respectively; a gating network then combines the two intents for efficient prediction. We conduct extensive experiments on several real-world datasets. Experimental results demonstrate that our proposed methods achieve significant performance improvements over existing state-of-the-art baselines, validating the effectiveness of the proposed dual-channel heterogeneous graph framework and hybrid attentive network. © 2023 Elsevier Ltd
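
As a rough illustration of the gating step mentioned above, the sketch below combines a long-term (global attention) state and a short-term (local attention) state with a learned sigmoid gate; the dimensions and the exact gate form are assumptions, not the authors' HAN implementation.

```python
# Hedged sketch: gate-based fusion of long-term and short-term intent vectors.
import torch
import torch.nn as nn

class IntentGate(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, long_term: torch.Tensor, short_term: torch.Tensor) -> torch.Tensor:
        # g in (0, 1) decides, per dimension, how much each intent contributes
        g = torch.sigmoid(self.gate(torch.cat([long_term, short_term], dim=-1)))
        return g * long_term + (1 - g) * short_term

gate = IntentGate()
fused = gate(torch.randn(32, 128), torch.randn(32, 128))   # (32, 128) fused state
```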

3.
Traitement du Signal ; 39(3):893-898, 2022.
Article in English | ProQuest Central | ID: covidwho-2298522

ABSTRACT

Many education facilities have recently switched to online learning due to the COVID-19 pandemic. The nature of online learning makes dishonest behaviors, such as cheating or lying during lessons, easier. We propose a new artificial-intelligence-powered solution to help educators address this rising problem and create a fairer learning environment. We built a visual-representation contrastive learning method with the MobileNetV2 network as the backbone to improve prediction from an unlabeled dataset, which can be deployed on low-power devices. The experiments show an accuracy of up to 59%, better than several previous studies, demonstrating the usability of this approach.

4.
Comput Biol Med ; 159: 106847, 2023 06.
Article in English | MEDLINE | ID: covidwho-2304356

ABSTRACT

BACKGROUND: Convolutional Neural Networks (CNNs) and hybrid models of CNNs and Vision Transformers (ViTs) are the current mainstream methods for COVID-19 medical image diagnosis. However, pure CNNs lack global modeling ability, and hybrid CNN-ViT models suffer from large parameter counts and high computational complexity, making them difficult to use effectively for just-in-time medical diagnosis. METHODS: Therefore, a lightweight medical diagnosis network, CTMLP, based on convolutions and multi-layer perceptrons (MLPs) is proposed for the diagnosis of COVID-19. Previous self-supervised algorithms are based on CNNs and ViTs, and their effectiveness for MLPs is not yet known; at the same time, the medical image domain lacks ImageNet-scale datasets for model pre-training. A pre-training scheme, TL-DeCo, based on transfer learning and self-supervised learning was therefore constructed. Because TL-DeCo is too tedious and resource-consuming to rebuild for each new model, a guided self-supervised pre-training scheme was additionally constructed for pre-training the new lightweight model. RESULTS: The proposed CTMLP achieves an accuracy of 97.51%, an F1-score of 97.43%, and a recall of 98.91% without pre-training, with only 48% of the parameters of ResNet50. Furthermore, the proposed guided self-supervised learning scheme improves the baseline of simple self-supervised learning by 1%-1.27%. CONCLUSION: The final results show that the proposed CTMLP can replace CNNs or Transformers for a more efficient diagnosis of COVID-19. In addition, the additional pre-training framework makes it more promising for clinical practice.


Subject(s)
COVID-19 Testing , COVID-19 , Humans , COVID-19/diagnostic imaging , Neural Networks, Computer , Algorithms , Endoscopy
5.
Communications in Mathematical Biology and Neuroscience ; 2023(13), 2023.
Article in English | Scopus | ID: covidwho-2273168

ABSTRACT

Ever since the COVID-19 outbreak, numerous researchers have attempted to train accurate Deep Learning (DL) models, especially Convolutional Neural Networks (CNNs), to assist medical personnel in diagnosing COVID-19 infections from Chest X-Ray (CXR) images. However, data imbalance and small dataset sizes have been persistent issues in training DL models for medical image classification, and most researchers have focused on complex novel methods while few have explored this problem. In this research, we demonstrate how Self-Supervised Learning (SSL) can assist DL models during pre-training and how Transfer Learning (TL) can then be used to train models that are more robust to data imbalance. The Swapping Assignments between Views (SwAV) algorithm in particular is known to enhance the accuracy of CNN models on classification tasks after TL. By training a ResNet-50 model pre-trained with SwAV on a severely imbalanced CXR dataset, the model greatly outperformed its counterpart pre-trained in a standard supervised manner. The SwAV-TL ResNet-50 model attained 0.952 AUROC and a 0.821 macro-averaged F1 score when trained on the imbalanced dataset. Hence, TL using models pre-trained with SwAV can achieve better accuracy even when the dataset is severely imbalanced, which is usually the case for medical image datasets. © 2023, SCIK Publishing Corporation. All rights reserved.
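
A minimal sketch of the fine-tuning setup described above, assuming the SwAV authors' torch.hub entry point is available and using inverse-frequency class weights to soften the imbalance; the class counts, head size, and hub call are illustrative assumptions.

```python
# Hedged sketch: fine-tune a SwAV-pretrained ResNet-50 on an imbalanced CXR set
# with class-weighted cross-entropy.
import torch
import torch.nn as nn

# Assumed hub entry; alternatively load a local SwAV checkpoint into
# torchvision.models.resnet50 with load_state_dict(..., strict=False).
backbone = torch.hub.load("facebookresearch/swav:main", "resnet50")
backbone.fc = nn.Linear(2048, 3)        # 2048 = ResNet-50 feature width; 3 CXR classes

class_counts = torch.tensor([18000., 6000., 900.])   # made-up counts for illustration
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    loss = criterion(backbone(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```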

6.
2022 International Conference on Wearables, Sports and Lifestyle Management, WSLM 2022 ; : 70-75, 2022.
Article in English | Scopus | ID: covidwho-2269838

ABSTRACT

Since the global outbreak of COVID-19, the epidemic has had a great impact on people's lives and the world economy. Diagnosing COVID-19 with deep learning has become increasingly important due to the inefficiency of traditional RT-PCR tests. However, training deep neural networks requires a large amount of manually labeled data, and collecting a large number of COVID-19 CT images is difficult. To address this issue, we explore the effect of Pretext-Invariant Representation Learning (PIRL), which pre-trains the network on unlabeled datasets, on classification results. We also explore the prediction performance of PIRL combined with transfer learning (TF). According to the experimental results, the TF-PIRL prediction model constructed in this paper achieves an accuracy of 0.7734 and an AUC of 0.8556 on COVID-19 diagnosis, outperforming training from scratch, transfer-learning-based training, and PIRL-based training alone. © 2022 IEEE.

7.
Neural Comput Appl ; 35(15): 10717-10731, 2023.
Article in English | MEDLINE | ID: covidwho-2268951

ABSTRACT

The Coronavirus disease 2019 (COVID-19) has spread rapidly all over the world since it was first reported in December 2019, and thoracic computed tomography (CT) has become one of the main tools for its diagnosis. In recent years, deep learning-based approaches have shown impressive performance in myriad image recognition tasks, but they usually require a large amount of annotated data for training. Inspired by ground-glass opacity, a common finding in COVID-19 patients' CT scans, we propose a novel self-supervised pretraining method based on pseudo-lesion generation and restoration for COVID-19 diagnosis. We used Perlin noise, a gradient-noise-based mathematical model, to generate lesion-like patterns, which were then randomly pasted onto the lung regions of normal CT images to produce pseudo-COVID-19 images. The pairs of normal and pseudo-COVID-19 images were used to train an encoder-decoder architecture-based U-Net for image restoration, which does not require any labeled data. The pretrained encoder was then fine-tuned using labeled data for the COVID-19 diagnosis task. Two public COVID-19 CT diagnosis datasets were employed for evaluation. Comprehensive experimental results demonstrate that the proposed self-supervised learning approach extracts better feature representations for COVID-19 diagnosis, and its accuracy outperforms the supervised model pretrained on large-scale images by 6.57% and 3.03% on the SARS-CoV-2 dataset and the Jinan COVID-19 dataset, respectively.
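
The pseudo-lesion idea can be sketched as follows: generate a smooth noise pattern (a cheap stand-in for the paper's Perlin noise), paste it into the lung region of a normal slice, and keep the (corrupted, clean) pair as a restoration training example. The noise generator, threshold, and blending weight are assumptions for illustration, and slices are assumed to be normalized to [0, 1].

```python
# Hedged sketch of pseudo-lesion generation for restoration pretraining.
import numpy as np

def smooth_noise(h: int, w: int, scale: int = 8, rng=None) -> np.ndarray:
    """Bilinearly upsampled random grid as a stand-in for gradient/Perlin noise."""
    rng = rng if rng is not None else np.random.default_rng()
    coarse = rng.random((h // scale + 2, w // scale + 2))
    ys = np.linspace(0, coarse.shape[0] - 2, h)
    xs = np.linspace(0, coarse.shape[1] - 2, w)
    yi, xi = np.floor(ys).astype(int), np.floor(xs).astype(int)
    fy, fx = (ys - yi)[:, None], (xs - xi)[None, :]
    top = coarse[yi][:, xi] * (1 - fx) + coarse[yi][:, xi + 1] * fx
    bot = coarse[yi + 1][:, xi] * (1 - fx) + coarse[yi + 1][:, xi + 1] * fx
    return top * (1 - fy) + bot * fy

def add_pseudo_lesion(ct_slice: np.ndarray, lung_mask: np.ndarray, intensity: float = 0.4):
    """Blend a lesion-like pattern into the lung region; return (input, target)."""
    pattern = smooth_noise(*ct_slice.shape)
    pattern = (pattern > 0.6) * pattern                     # keep only bright blobs
    corrupted = ct_slice + intensity * pattern * lung_mask  # corrupt lungs only
    return np.clip(corrupted, 0.0, 1.0), ct_slice           # restoration pair
```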

8.
Int J Comput Assist Radiol Surg ; 18(4): 715-722, 2023 Apr.
Article in English | MEDLINE | ID: covidwho-2268672

ABSTRACT

PURPOSE: Given the number of patients screened during the COVID-19 pandemic, computer-aided detection has strong potential to improve clinical workflow efficiency and reduce the incidence of infections among radiologists and healthcare providers. Since many confirmed COVID-19 cases present radiological findings of pneumonia, radiologic examinations can be useful for fast detection. Chest radiography can therefore be used to quickly screen for COVID-19 during patient triage and determine the priority of each patient's care, helping saturated medical facilities in a pandemic situation. METHODS: In this paper, we propose a new learning scheme called self-supervised transfer learning for detecting COVID-19 from chest X-ray (CXR) images. We compared six self-supervised learning (SSL) methods (Cross, BYOL, SimSiam, SimCLR, PIRL-jigsaw, and PIRL-rotation) with the proposed method, as well as six pretrained DCNNs (ResNet18, ResNet50, ResNet101, CheXNet, DenseNet201, and InceptionV3). We provide a quantitative evaluation on the largest open COVID-19 CXR dataset and qualitative results for visual inspection. RESULTS: Our method achieved a harmonic mean (HM) score of 0.985, an AUC of 0.999, and a four-class accuracy of 0.953. We also used the visualization technique Grad-CAM++ to generate visual explanations for the different classes of CXR images, increasing the interpretability of the proposed method. CONCLUSIONS: Our method shows that the knowledge learned from natural images through transfer learning is beneficial for SSL on CXR images and boosts the performance of representation learning for COVID-19 detection. Our method promises to reduce the incidence of infections among radiologists and healthcare providers.


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , Pandemics , X-Rays , Thorax , Machine Learning
9.
Comput Biol Med ; 158: 106877, 2023 05.
Article in English | MEDLINE | ID: covidwho-2268671

ABSTRACT

PROBLEM: Detecting COVID-19 from chest X-ray (CXR) images has become one of the fastest and easiest methods for detecting COVID-19. However, existing methods usually use supervised transfer learning from natural images as a pretraining process and do not consider the unique features of COVID-19 or the features it shares with other pneumonia. AIM: In this paper, we design a novel high-accuracy COVID-19 detection method for CXR images that considers both the unique features of COVID-19 and the features it shares with other pneumonia. METHODS: Our method consists of two phases: self-supervised learning-based pretraining and batch knowledge ensembling-based fine-tuning. Self-supervised pretraining learns distinguishing representations from CXR images without manually annotated labels, while batch knowledge ensembling-based fine-tuning utilizes the category knowledge of images in a batch, according to their visual feature similarities, to improve detection performance. Unlike our previous implementation, we introduce batch knowledge ensembling into the fine-tuning phase, reducing the memory used in self-supervised learning and improving COVID-19 detection accuracy. RESULTS: On two public COVID-19 CXR datasets, namely a large dataset and an unbalanced dataset, our method exhibited promising COVID-19 detection performance. It maintains high detection accuracy even when the annotated CXR training images are reduced significantly (e.g., using only 10% of the original dataset) and is insensitive to changes in hyperparameters. CONCLUSION: The proposed method outperforms other state-of-the-art COVID-19 detection methods in different settings and can reduce the workloads of healthcare providers and radiologists.
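
One plausible reading of the batch-knowledge-ensembling step is sketched below: each image's soft label is refined by mixing in the predictions of visually similar images from the same batch, and the refined distribution is used as a fine-tuning target. The temperature, mixing weight, and exact formulation are assumptions rather than the authors' method.

```python
# Hedged sketch: similarity-weighted ensembling of predictions within a batch.
import torch
import torch.nn.functional as F

def batch_ensembled_targets(features: torch.Tensor, logits: torch.Tensor,
                            temperature: float = 0.1, omega: float = 0.5) -> torch.Tensor:
    """features: (B, D) encoder features; logits: (B, C) classifier outputs."""
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t() / temperature          # (B, B) pairwise similarity
    sim.fill_diagonal_(float("-inf"))              # exclude self-similarity
    weights = F.softmax(sim, dim=1)                # neighbours' importance
    probs = F.softmax(logits, dim=1)
    neighbour_probs = weights @ probs              # similarity-weighted ensemble
    return (1 - omega) * probs + omega * neighbour_probs   # refined soft targets

# Fine-tuning could then minimise a KL term between the refined targets and the
# model's predictions alongside the usual cross-entropy on labelled images.
```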


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , Radiologists , Thorax , Upper Extremity , Supervised Machine Learning
10.
Computer Vision, ECCV 2022, Pt XXI ; 13681:627-643, 2022.
Article in English | Web of Science | ID: covidwho-2233939

ABSTRACT

As segmentation labels are scarce, extensive research has been conducted on training segmentation networks with domain adaptation, semi-supervised, or self-supervised learning techniques that exploit abundant unlabeled data. However, these approaches appear quite different from each other, and it is not clear how they can be combined for better performance. Inspired by recent multi-domain image translation approaches, we propose a novel segmentation framework using adaptive instance normalization (AdaIN), in which a single generator is trained to perform both domain adaptation and semi-supervised segmentation tasks via knowledge distillation simply by changing task-specific AdaIN codes. Specifically, our framework is designed for difficult situations in chest X-ray radiograph (CXR) segmentation where labels are only available for normal data but the trained model must be applied to both normal and abnormal data. The proposed network demonstrates great generalizability under domain shift and achieves state-of-the-art performance for abnormal CXR segmentation.

11.
Information Sciences ; 624:200-216, 2023.
Article in English | ScienceDirect | ID: covidwho-2165418

ABSTRACT

Online intelligent education has recently attracted increasing attention, especially due to the global influence of Covid-19. A major task of intelligent education is Knowledge Tracing (KT), which aims to capture students' dynamic status from their historical interaction records and predict their responses to new questions. However, most existing KT methods suffer from the record data sparsity problem: there are a huge number of questions in an online database, and each student can only interact with a very small subset of them, so the records of some questions can be extremely sparse, which may significantly degrade the performance of traditional KT methods. Although recent graph neural network (GNN) based KT methods can fuse graph-structured information and improve the representation of questions to some extent, the pairwise structure of GNNs neglects the complex high-order and heterogeneous relations among questions. To resolve these issues, we develop a novel KT model with a heterogeneous hypergraph network (HHN) and propose an attentive mechanism, including intra- and inter-graph attention, to aggregate neighbors' information over the HHN. To further enhance the question representation, we supplement the supervised KT prediction task with an auxiliary self-supervised task: we generate an augmented view with adaptive data augmentation to perform contrastive learning and exploit the unobserved relations among questions. We conduct extensive experiments on several real-world datasets. Experimental results demonstrate that our proposed method achieves significant performance improvements compared to state-of-the-art KT methods.

12.
Front Bioinform ; 1: 693177, 2021.
Article in English | MEDLINE | ID: covidwho-2089806

ABSTRACT

The life-threatening disease COVID-19 has inspired significant efforts to discover novel therapeutic agents through repurposing of existing drugs. Although multi-targeted (polypharmacological) therapies are recognized as the most efficient approach to systemic diseases such as COVID-19, computational multi-targeted compound screening has been limited by the scarcity of high-quality experimental data and difficulties in extracting information from molecules. This study introduces MolGNN, a new deep learning model for molecular property prediction. MolGNN applies a graph neural network to learn chemical molecule embeddings. Compared to state-of-the-art approaches that rely heavily on labeled experimental data, our method achieves equivalent or superior prediction performance without manual labels in the pretraining stage and excellent performance on data with only a few labels. Our results indicate that MolGNN is robust to scarce training data and is hence a powerful few-shot learning tool. MolGNN predicted several multi-targeted molecules against both human Janus kinases and the SARS-CoV-2 main protease, which are preferential targets for drugs aimed, respectively, at alleviating COVID-19 cytokine storm symptoms and suppressing viral replication. We also predicted molecules potentially inhibiting cell death induced by SARS-CoV-2. Several of MolGNN's top predictions are supported by existing experimental and clinical evidence, demonstrating the potential value of our method.

13.
Medical Image Learning with Limited and Noisy Data (MILLanD 2022) ; 13559:76-85, 2022.
Article in English | Web of Science | ID: covidwho-2085277

ABSTRACT

The role of chest X-ray (CXR) imaging, which is more cost-effective, more widely available, and faster to acquire than CT, has evolved during the COVID-19 pandemic. To improve the diagnostic performance of CXR imaging, a growing number of studies have investigated whether supervised deep learning methods can provide additional support. However, supervised methods rely on a large number of labeled radiology images, and labeling is a time-consuming and complex procedure requiring expert clinician input. Due to the relative scarcity of COVID-19 patient data and the costly labeling process, self-supervised learning methods have gained momentum and have been shown to achieve results comparable to fully supervised approaches. In this work, we study the effectiveness of self-supervised learning for diagnosing COVID-19 from CXR images. We propose a multi-feature Vision Transformer (ViT) guided architecture that deploys a cross-attention mechanism to learn information from both original CXR images and corresponding enhanced local-phase CXR images. Using 10% labeled CXR scans, the proposed model achieves 91.10% and 96.21% overall accuracy on a total of 35,483 CXR images of healthy (8,851), regular pneumonia (6,045), and COVID-19 (18,159) scans, a significant improvement over state-of-the-art techniques. Code is available at https://github.com/endiqq/Multi-Feature-ViT.
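
A small sketch of the cross-attention idea: tokens from the original CXR attend to tokens from the enhanced local-phase CXR so that each stream can borrow the other's evidence. The embedding width, number of heads, and use of `nn.MultiheadAttention` are assumptions, not the repository's exact module.

```python
# Hedged sketch: cross-attention fusion between two CXR token streams.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, tokens_orig: torch.Tensor, tokens_phase: torch.Tensor) -> torch.Tensor:
        # queries from the original-image tokens, keys/values from the
        # local-phase tokens, plus a residual connection back to the queries
        q = self.norm_q(tokens_orig)
        kv = self.norm_kv(tokens_phase)
        fused, _ = self.attn(q, kv, kv)
        return tokens_orig + fused

fusion = CrossAttentionFusion()
out = fusion(torch.randn(2, 196, 768), torch.randn(2, 196, 768))   # (2, 196, 768)
```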

14.
Med Image Comput Comput Assist Interv ; 13434: 556-566, 2022 Sep.
Article in English | MEDLINE | ID: covidwho-2059729

ABSTRACT

Vision transformers efficiently model long-range context and have thus demonstrated impressive accuracy gains in several image analysis tasks, including segmentation. However, such methods need large labeled datasets for training, which are hard to obtain for medical image analysis. Self-supervised learning (SSL) has demonstrated success in medical image segmentation using convolutional networks. In this work, we developed self-distillation learning with masked image modeling (SMIT) to perform SSL for vision transformers, applied to 3D multi-organ segmentation from CT and MRI. Our contribution combines a dense pixel-wise regression pretext task performed within masked patches, called masked image prediction, with masked patch token distillation to pre-train vision transformers. Our approach is more accurate and requires fewer fine-tuning datasets than other pretext tasks. Unlike prior methods, which typically use image sets from the disease sites and imaging modalities of the target tasks, we used 3,643 CT scans (602,708 images) of head and neck, lung, and kidney cancers as well as COVID-19 for pre-training, and applied the model to abdominal organ segmentation from MRI of pancreatic cancer patients as well as segmentation of 13 different abdominal organs from publicly available CT. Our method showed clear accuracy improvements (average DSC of 0.875 on MRI and 0.878 on CT) with a reduced requirement for fine-tuning datasets compared to commonly used pretext tasks. Extensive comparisons against multiple current SSL methods were performed. Our code is available at: https://github.com/harveerar/SMIT.git.
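
The masked-image-prediction pretext described above can be illustrated as follows: hide a random subset of patches and regress their pixel values from the visible context, scoring the loss only on the hidden patches. The patch size, mask ratio, and plain pixel regression are illustrative assumptions (the paper additionally distills masked patch tokens).

```python
# Hedged sketch: random patch masking and a reconstruction loss on hidden patches.
import torch

def random_patch_mask(images: torch.Tensor, patch: int = 16, mask_ratio: float = 0.7):
    """Zero out a random fraction of non-overlapping patches; return (masked, mask)."""
    b, _, h, w = images.shape
    keep = torch.rand(b, h // patch, w // patch) > mask_ratio            # True = visible
    mask = keep.repeat_interleave(patch, 1).repeat_interleave(patch, 2)  # (b, h, w)
    return images * mask.unsqueeze(1), mask

def masked_reconstruction_loss(model, images: torch.Tensor) -> torch.Tensor:
    masked, mask = random_patch_mask(images)
    pred = model(masked)                          # model outputs an image-shaped tensor
    hidden = (~mask).unsqueeze(1).float()         # 1 where pixels were hidden
    return ((pred - images) ** 2 * hidden).sum() / hidden.sum().clamp(min=1)
```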

15.
8th International Conference on Virtual Reality, ICVR 2022 ; 2022-May:330-336, 2022.
Article in English | Scopus | ID: covidwho-2018879

ABSTRACT

Research on intelligent diagnosis and treatment is a major frontier issue in the current era of medical big data. For the global health crisis of COVID-19, the radiological imaging technique CT can provide useful and important information and is widely preferred due to its merits and its three-dimensional view of the lung. However, classifying CT slices to assist diagnosis is difficult: annotation by radiologists is a highly subjective, tedious, and time-consuming task, often influenced by individual bias and clinical experience. Moreover, current image classification methods do not work well on massive, real-time, entirely unlabeled CT scans. To address these challenges, we propose a transfer learning method that uses self-supervised information to classify unlabeled CT images, with an auxiliary segmentation task to improve classification efficiency. We classified entirely unlabeled CT scans from Huoshenshan Hospital into ordinary, severe, and critical cases, reaching an accuracy of 86%. The experimental results show that this small-sample semi-supervised transfer learning algorithm can be used when CT images are insufficient. Our framework improves learning ability and achieves higher performance. Extensive experiments on real CT volumes demonstrate that the proposed method outperforms most current models and advances the state-of-the-art performance. © 2022 IEEE.

16.
Lecture Notes on Data Engineering and Communications Technologies ; 148:406-415, 2022.
Article in English | Scopus | ID: covidwho-2013996

ABSTRACT

This paper applies self-supervised learning to diagnose coronavirus disease (COVID-19) among pneumonia and normal cases based on chest Computed Tomography (CT) images. Aware that medical imaging in real-world scenarios lacks well-verified and explicitly labeled datasets, a well-known challenge for supervised learning, we use the Momentum Contrast v2 (MoCo v2) algorithm to pre-train our proposed Self-Supervised Medical Imaging Network (SSL-MedImNet), which generalizes remarkably well from substantial unlabeled data. The proposed model achieves competitive and promising performance on COVIDx CT-2, a well-known, high-quality dataset for COVID-19 assessment, and its pre-trained representations transfer well to the diagnosis task. Moreover, SSL-MedImNet approximately matches its supervised counterparts COVID-Net CT-1 and COVID-Net CT-2 with only small differences. In particular, with only a few additional dense layers, the proposed model achieves a COVID-19 accuracy of approximately 88.3% and specificity of 98.4%, and competitive results for normal and pneumonia cases. The results advocate the potential of self-supervised learning to achieve highly generalized understanding from unlabeled medical images and then transfer it to relevant supervised tasks in real scenarios. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
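
The "few additional dense layers" setup can be sketched roughly as below: a pre-trained ResNet-50 encoder is frozen and a small dense head is trained for the three COVIDx CT classes. The checkpoint path, head sizes, and frozen-encoder choice are assumptions for illustration.

```python
# Hedged sketch: frozen (MoCo v2-style) encoder plus a small trainable dense head.
import torch
import torch.nn as nn
import torchvision

encoder = torchvision.models.resnet50()
# state = torch.load("moco_v2_pretrained.pth")        # hypothetical local checkpoint
# encoder.load_state_dict(state, strict=False)
encoder.fc = nn.Identity()                            # expose 2048-d features

for p in encoder.parameters():                        # linear-probe style: freeze encoder
    p.requires_grad = False

head = nn.Sequential(
    nn.Linear(2048, 256), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(256, 3),                                # normal / pneumonia / COVID-19
)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

def forward(images: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        feats = encoder(images)                       # (B, 2048)
    return head(feats)                                # (B, 3) class logits
```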

17.
Traitement du Signal ; 39(3):893-898, 2022.
Article in English | Scopus | ID: covidwho-1994684

ABSTRACT

Many education facilities have recently switched to online learning due to the COVID-19 pandemic. The nature of online learning makes dishonest behaviors, such as cheating or lying during lessons, easier. We propose a new artificial-intelligence-powered solution to help educators address this rising problem and create a fairer learning environment. We built a visual-representation contrastive learning method with the MobileNetV2 network as the backbone to improve prediction from an unlabeled dataset, which can be deployed on low-power devices. The experiments show an accuracy of up to 59%, better than several previous studies, demonstrating the usability of this approach. © 2022 Lavoisier. All rights reserved.

18.
ACM BCB ; 2022, 2022 Aug.
Article in English | MEDLINE | ID: covidwho-1993099

ABSTRACT

Clinical EHR data is naturally heterogeneous, containing abundant sub-phenotypes. Such diversity creates challenges for outcome prediction with machine learning models because it leads to high intra-class variance. To address this issue, we propose a supervised pre-training model with a unique embedded k-nearest-neighbor positive sampling strategy. We demonstrate the value of this framework theoretically and show that it yields highly competitive experimental results in predicting patient mortality on real-world COVID-19 EHR data from over 7,000 patients admitted to a large, urban health system. Our method achieves an AUROC of 0.872, outperforming alternative pre-training models and traditional machine learning methods, and performs much better when the training data size is small (345 training instances).
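
A rough sketch of the k-nearest-neighbour positive sampling idea: for each anchor record, positives are same-class records among its k nearest neighbours in feature space, and a supervised contrastive-style loss pulls them together. The value of k, the temperature, and the loss form are assumptions, not the paper's exact objective.

```python
# Hedged sketch: kNN-restricted positives for a supervised contrastive loss.
import torch
import torch.nn.functional as F

def knn_positive_mask(features: torch.Tensor, labels: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Boolean (B, B) mask: same-class records within each row's k nearest neighbours."""
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t()
    sim.fill_diagonal_(float("-inf"))
    knn_idx = sim.topk(k, dim=1).indices                              # (B, k)
    knn_mask = torch.zeros_like(sim, dtype=torch.bool).scatter_(1, knn_idx, True)
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    return knn_mask & same_class

def knn_contrastive_loss(features: torch.Tensor, labels: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    feats = F.normalize(features, dim=1)
    logits = feats @ feats.t() / temperature
    logits.fill_diagonal_(float("-inf"))                              # no self-pairs
    pos = knn_positive_mask(features, labels)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    per_anchor = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor.mean()
```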

19.
Diagnostics (Basel) ; 12(8), 2022 Jul 26.
Article in English | MEDLINE | ID: covidwho-1957249

ABSTRACT

BACKGROUND: Automated segmentation of COVID-19 infection lesions and assessment of infection severity are critical for COVID-19 diagnosis and treatment. Deep learning approaches based on large amounts of annotated data have been widely used in COVID-19 medical image analysis. However, the number of medical image samples required is generally huge, and it is challenging to obtain enough annotated medical images to train a deep CNN model. METHODS: To address these challenges, we propose a novel self-supervised deep learning method for automated segmentation of COVID-19 infection lesions and assessment of infection severity that reduces the dependence on annotated training samples. In the proposed method, a large amount of unlabeled data is first used to pre-train an encoder-decoder model to learn rotation-dependent and rotation-invariant features. A small amount of labeled data is then used to fine-tune the pre-trained encoder-decoder for COVID-19 severity classification and lesion segmentation. RESULTS: The proposed method was tested on two public COVID-19 CT datasets and one self-built dataset. Accuracy, precision, recall, and F1-score were used to measure classification performance, and the Dice coefficient was used to measure segmentation performance. For COVID-19 severity classification, the proposed method outperformed other unsupervised feature learning methods by about 7.16% in accuracy. For segmentation, the Dice value of the proposed method was 5.58% higher than that of U-Net when 100% of the labeled data was used, 8.02% higher with 70%, 11.88% higher with 30%, and 16.88% higher with 10%. CONCLUSIONS: The proposed method provides better classification and segmentation performance than other methods under limited labeled data.
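
The rotation-based pretext mentioned above (learning rotation-dependent features) can be sketched as follows: each slice is rotated by a random multiple of 90 degrees and the encoder is trained to predict which rotation was applied. The 4-way head and the encoder's feature width are assumptions for illustration.

```python
# Hedged sketch: 4-way rotation-prediction pretext for encoder pretraining.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(images: torch.Tensor):
    """Rotate each (C, H, W) image by k*90 degrees; return images and labels k."""
    ks = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, ks)])
    return rotated, ks

class RotationHead(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int = 512):
        super().__init__()
        self.encoder = encoder                    # any CNN mapping images to (B, feat_dim)
        self.classifier = nn.Linear(feat_dim, 4)  # predict one of 4 rotations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.encoder(x))

def pretext_step(model: RotationHead, images: torch.Tensor, optimizer) -> float:
    rotated, targets = rotate_batch(images)
    loss = F.cross_entropy(model(rotated), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```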

20.
2nd International Conference on Intelligent Systems and Pattern Recognition, ISPR 2022 ; 1589 CCIS:78-89, 2022.
Article in English | Scopus | ID: covidwho-1930342

ABSTRACT

Most existing computer vision applications rely on models trained on supervised corpora, which is at odds with the explosion of massive sets of unlabeled data. In medical imaging, for example, creating labels is extremely time-consuming because professionals must spend countless hours looking at images to manually annotate or segment them. Recently, several works have sought solutions to the challenge of learning effective visual representations without human supervision. In this work, we investigate the potential of self-supervised learning as a pretraining phase for improving the classification of radiographic images when the amount of available annotated data is small. To do so, we pretrain a deep encoder with contrastive learning on a chest X-ray dataset using no labels at all, and then fine-tune it using only a few labeled data samples. We experimentally demonstrate that unsupervised pretraining on unlabeled data learns useful representations from chest X-ray images, and that only a few labeled samples are sufficient to reach the same accuracy as a supervised model trained on the whole annotated dataset. © 2022, Springer Nature Switzerland AG.
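
A compact sketch of the contrastive pretraining objective described above, using an NT-Xent-style loss over two augmented views of each unlabeled CXR batch; the temperature and the assumption of a SimCLR-like setup are illustrative, not necessarily the authors' exact framework.

```python
# Hedged sketch: NT-Xent loss for contrastive pretraining on unlabeled images.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (B, D) projections of two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # (2B, D)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))                      # drop self-pairs
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

# Pretrain encoder + projection head on unlabeled CXR batches with nt_xent, then
# discard the projection head and fine-tune the encoder on the few labeled samples
# with standard cross-entropy.
```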
